feat(c/sedona-sedona-libgpuspatial): Refactoring GPU Spatial Join Library #556
pwrliang wants to merge 6 commits into apache:main from
Conversation
As you wait for a proper review, I'll suggest that you consider breaking this into multiple PRs. It looks like you described 4 separate changes (the bullets) that would lend themselves nicely to at least 4 isolated PRs. I know it's more work for you, but it reduces the review burden significantly, as reviewers don't need to figure out which of the 4 bullets a particular code change applies to. You can always base branches off of each other to reuse work from your other branches. Something like below, or however you see fit. Doing a separate PR for Header Renaming and straightforward changes would cut down the 88-file diff significantly, and could even be reviewed by people with less context about GPU join (like me). Separate PRs could help your changes land faster, and help with future debugging / understanding when someone tries to figure out what happened. WDYT? Would breaking this up be reasonable?
@pwrliang kindly split this one out from the larger parent PR at my request... you are absolutely correct that 88 files and 6000 lines is a huge diff, and those are excellent suggestions on how to split that up further. I do plan to attempt reviewing this tomorrow; however, we do need to respect that this development is happening in a public repository with a community, and in the future (or now, if Peter or other members of the community are blocked from participating because of this PR's size) we will need changes to be incremental. We want this code in SedonaDB not just because it is awesome but because we want to be the place where future contributors add CUDA-accelerated spatial functions, and to do that we'll need to work in the open and incrementally. My idea with scoping this to only include code in
To be clear, I'm not dying to participate 😅. Definitely too busy for that. Just figured I'd encourage making it more modular, for both the purposes of reviewing and for cleaner traceability and git history. Though yeah, the fact that this is isolated to
I guess my point is that it looks like it could be broken up even more (primarily based on the PR description). But if you're all good with reviewing it as it is, go for it! I don't mean to stand in the way.
Thanks for your suggestions. For future PRs, I will make incremental changes so they are easy to review.
paleolimbot left a comment
Thank you for working on this, and apologies for taking so long to review it! I promise all of my comments are just details...there just happens to be a lot of code in this PR and so there are also a lot of details. Generally:
- This is great! I really appreciate the test coverage and the new C interface that aligns more readily with the way our existing spatial join works. I'm glad to know this approach is also faster.
- There are a few things around iterating over WKB arrays that could be faster. It is OK if you don't want to explore ways to make this faster in this PR... I was already here looking at your code, so I left a comment wherever I noticed something that could be faster. I will be looking at making it easier to do this sort of thing in geoarrow-c soon because I need it for Geography support (also a binding to a C++ library), and at least now I know that functions like memory size estimates or rechunking are worth building in when I'm there.
- I do think we should deduplicate the three tests where you compare results to the GEOS-calculated version. The one that uses GEOS C++ is the cleanest, and perhaps the other two could use that approach as well. If they are testing the same bit of code, it is also OK to have them be one test.
- A few minor comments on the C API
- Removing the Rust section of this PR (e.g., by just deleting everything from the Rust wrapper crate except the bindgen bindings) will make it easier to merge this PR and review the Rust code separately (see inline comment).
Resolved review threads on:
- c/sedona-libgpuspatial/libgpuspatial/include/gpuspatial/loader/parallel_wkb_loader.hpp
- c/sedona-libgpuspatial/libgpuspatial/include/gpuspatial/refine/spatial_refiner.hpp
int (*push_build)(struct SedonaSpatialRefiner* self,
                  const struct ArrowSchema* build_schema,
                  const struct ArrowArray* build_array);
Is it possible to add a callback `int (*init_schema)(struct ArrowSchema* build_schema, struct ArrowSchema* probe_schema)` and eliminate all the `const struct ArrowSchema*` arguments for the other callbacks?
I know this doesn't help you in Rust because arrow-rs likes to export the schema whether you want it or not. But this might be fixed at some point and from a C API standpoint it's much cleaner.
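For concreteness, a minimal sketch of the suggested shape (the struct name and overall layout here are hypothetical; only the init_schema signature comes from the suggestion above):

```c
/* Hypothetical sketch, not the current API: declare the schemas once up front,
 * then drop the ArrowSchema arguments from the later per-batch callbacks. */
struct ArrowSchema;
struct ArrowArray;

struct HypotheticalRefiner {
  int (*init_schema)(struct ArrowSchema* build_schema, struct ArrowSchema* probe_schema);
  /* push_build would then only need the array. */
  int (*push_build)(struct HypotheticalRefiner* self, const struct ArrowArray* build_array);
  void* private_data;
};
```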
I think passing a pair of <ArrowSchema, ArrowArray> could be more flexible as it allows loading arrays with different schemas, e.g., push a Binary array first and push another BinaryView array later.
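A minimal usage sketch of that flexibility, assuming the C header (hypothetically named gpuspatial/gpuspatial_c.h here) declares struct SedonaSpatialRefiner with the push_build callback quoted above:

```c
#include "gpuspatial/gpuspatial_c.h" /* hypothetical header name for the C API */

/* Sketch: with a <schema, array> pair per call, batches with different physical
 * layouts can be pushed to the same refiner, e.g. Binary WKB now, BinaryView later. */
static int push_mixed_layout_batches(struct SedonaSpatialRefiner* refiner,
                                     const struct ArrowSchema* binary_schema,
                                     const struct ArrowArray* binary_batch,
                                     const struct ArrowSchema* binary_view_schema,
                                     const struct ArrowArray* binary_view_batch) {
  int rc = refiner->push_build(refiner, binary_schema, binary_batch);
  if (rc != 0) return rc;
  /* A later batch may carry a different schema without re-creating the refiner. */
  return refiner->push_build(refiner, binary_view_schema, binary_view_batch);
}
```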
                  uint32_t* build_indices, uint32_t* probe_indices,
                  uint32_t indices_size, uint32_t* new_indices_size);
I noticed in a recent PR that @Kontinuation used 64-bit indices for the build side but 32-bit indices for the probe side. I am guessing that here the extra 4 bytes per row probably makes a difference because you have to shuffle them to and from the GPU, but if it doesn't, using a uint64 for the build side might better align with Kristin's join code.
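To put the extra 4 bytes per row in perspective (illustrative arithmetic, not a measurement from this PR): at 100 million candidate pairs per batch, widening the build index from uint32_t to uint64_t adds roughly 400 MB of additional data to shuffle in each direction between host and GPU.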
I guess you refer to batch refinement. Kristin's changes are compatible with the current interface design. https://github.com/zhangfengcdt/sedona-db/blob/bba62f552317d0e8f61b5af3fcbfd1c916296628/rust/sedona-spatial-join/src/index/gpu_spatial_index.rs#L180
int (*refine)(struct SedonaSpatialRefiner* self, const struct ArrowSchema* schema1,
              const struct ArrowArray* array1, const struct ArrowSchema* schema2,
              const struct ArrowArray* array2,
              enum SedonaSpatialRelationPredicate predicate, uint32_t* indices1,
              uint32_t* indices2, uint32_t indices_size, uint32_t* new_indices_size);
Some documentation for these parameters would be helpful (are these schemas the same as probe/build?)
This interface is currently unused, so I have removed it.
struct Payload {
  GEOSContextHandle_t handle;
  const GEOSGeometry* geom;
  std::vector<uint32_t> build_indices;
  std::vector<uint32_t> stream_indices;
  SedonaSpatialRelationPredicate predicate;
};
I believe a lot of this code also exists in another test? Is it worth testing the C wrapper in that test as well, instead of duplicating the code that calculates the "correct" answer, or consolidating the three places where that is currently done into some common code?
This PR introduces a major refactoring of the GPU-accelerated spatial join library. Compared to the previous design, this version decouples the spatial join into distinct filtering and refinement stages, making it easier to integrate into rust/sedona-spatial-join in an upcoming PR (a rough sketch of the resulting flow follows below). Additionally, this update includes performance optimizations and minor structural improvements:
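To illustrate that filter/refine split, here is a rough conceptual sketch of how a caller might drive the two stages through the C callbacks quoted in the review above; every concrete name in it (push_build_fn, refine_fn, join_batch, the candidate buffers) is a hypothetical placeholder rather than the library's actual API, and the in-place compaction behaviour is an assumption.

```c
#include <stdint.h>

struct ArrowSchema;
struct ArrowArray;
struct SedonaSpatialRefiner; /* provided by the gpuspatial C header */

/* Hypothetical callback shapes, mirroring the signature fragments quoted in the
 * review; the real callback names and full signatures may differ. */
typedef int (*push_build_fn)(struct SedonaSpatialRefiner* self,
                             const struct ArrowSchema* build_schema,
                             const struct ArrowArray* build_array);
typedef int (*refine_fn)(struct SedonaSpatialRefiner* self,
                         uint32_t* build_indices, uint32_t* probe_indices,
                         uint32_t indices_size, uint32_t* new_indices_size);

/* Conceptual two-stage flow: (1) load build-side geometries, (2) take coarse
 * candidate pairs from some filter (e.g. a bounding-box index, not shown), and
 * (3) let the refiner keep only the pairs that satisfy the exact predicate. */
static int join_batch(struct SedonaSpatialRefiner* refiner,
                      push_build_fn push_build, refine_fn refine,
                      const struct ArrowSchema* build_schema,
                      const struct ArrowArray* build_array,
                      uint32_t* candidate_build, uint32_t* candidate_probe,
                      uint32_t num_candidates, uint32_t* num_matches) {
  int rc = push_build(refiner, build_schema, build_array);
  if (rc != 0) return rc;
  /* Assumed behaviour: refinement compacts the candidate index buffers in place
   * and reports the surviving count through num_matches. */
  return refine(refiner, candidate_build, candidate_probe, num_candidates, num_matches);
}
```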